Beyond Feedforward Models Trained by Backpropagation: A Practical Training Tool for a More Efficient Universal Approximator
Authors
Abstract
Similar Articles
A New Approach to Credibility Premium for Zero-Inflated Poisson Models for Panel Data
The main aim of this research is to obtain and compare credibility premiums in under-reported count models for panel (longitudinal) data. Predictive premiums are calculated under squared-error and exponential loss functions and compared with each other. The desire to receive a bonus is one of the main reasons for not reporting accidents: to keep their discount, policyholders often refrain from reporting low-cost accidents. In this research ...
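As a rough illustration of the two premiums mentioned above, the sketch below draws a Monte Carlo posterior sample for a Poisson claim rate and evaluates the Bayes premium under squared-error loss (posterior mean) and under an exponential (LINEX-type) loss. The Gamma prior, the toy claim history, and the loss parameter c are illustrative assumptions, and the zero-inflation/under-reporting correction treated in the paper is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: claim counts for one policyholder over several periods.
claims = np.array([0, 0, 1, 0, 2, 0, 0, 1])

# Crude conjugate posterior for the Poisson rate lam with a Gamma(a, b) prior,
# ignoring the zero-inflation / under-reporting correction the paper handles.
a, b = 2.0, 1.0
lam_post = rng.gamma(a + claims.sum(), 1.0 / (b + len(claims)), size=200_000)

# Squared-error loss: the Bayes (credibility) premium is the posterior mean.
premium_se = lam_post.mean()

# Exponential (LINEX-type) loss with parameter c: the Bayes rule is
# -(1/c) * log E[exp(-c * lam) | data].
c = 0.5
premium_exp = -np.log(np.mean(np.exp(-c * lam_post))) / c

print(f"squared-error premium   : {premium_se:.4f}")
print(f"exponential-loss premium: {premium_exp:.4f}")
```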
The Learning Dynamics of a Universal Approximator
The learning properties of a universal approximator, a normalized committee machine with adjustable biases, are studied for on-line back-propagation learning. Within a statistical mechanics framework, numerical studies show that this model has features which do not exist in previously studied two-layer network models without adjustable biases, e.g., attractive suboptimal symmetric phases even f...
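For concreteness, here is a minimal sketch of the kind of model described above: a normalized committee machine with adjustable biases, trained by on-line back-propagation on examples generated by a fixed teacher network. The sizes, learning rate, and tanh activation are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 50, 3                      # input dimension, number of hidden units

def committee(x, W, b):
    # normalized committee output: average of the hidden-unit activations
    return np.tanh(W @ x + b).mean()

# Teacher network (fixed) and student network (trained on-line).
W_t, b_t = rng.normal(size=(K, N)), rng.normal(size=K)
W_s, b_s = 0.1 * rng.normal(size=(K, N)), np.zeros(K)

eta = 0.05                        # learning rate
for step in range(50_000):
    x = rng.normal(size=N) / np.sqrt(N)   # one fresh example per step (on-line)
    y = committee(x, W_t, b_t)            # teacher label
    h = W_s @ x + b_s
    err = committee(x, W_s, b_s) - y
    # gradient of 0.5 * err**2 w.r.t. student weights and biases
    grad_h = err * (1.0 - np.tanh(h) ** 2) / K
    W_s -= eta * np.outer(grad_h, x)
    b_s -= eta * grad_h

test_x = rng.normal(size=(1000, N)) / np.sqrt(N)
test_err = np.mean([(committee(x, W_s, b_s) - committee(x, W_t, b_t)) ** 2
                    for x in test_x])
print(f"mean squared generalization error: {test_err:.5f}")
```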
Beyond Backpropagation: Using Simulated Annealing for Training Neural Networks
The vast majority of neural network research relies on a gradient algorithm, typically a variation of backpropagation, to obtain the weights of the model. Because of the enigmatic nature of complex nonlinear optimization problems, such as training artificial neural networks, this technique has often produced inconsistent and unpredictable results. To go beyond backpropagation’s typical selectio...
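A minimal sketch of the alternative described above follows: the weights of a small feedforward network are chosen by simulated annealing rather than by a gradient algorithm. The XOR task, network architecture, cooling schedule, and proposal scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

def unpack(theta):
    W1 = theta[:6].reshape(3, 2)              # 3 hidden units, 2 inputs
    b1 = theta[6:9]
    W2 = theta[9:12]
    b2 = theta[12]
    return W1, b1, W2, b2

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)
    out = h @ W2 + b2
    return np.mean((out - y) ** 2)

theta = rng.normal(scale=0.5, size=13)
cur = loss(theta)
best, best_loss = theta.copy(), cur
T = 1.0
for step in range(20_000):
    T = max(1e-3, 0.999 * T)                  # geometric cooling schedule
    proposal = theta + rng.normal(scale=0.1, size=theta.size)
    prop_loss = loss(proposal)
    # Metropolis rule: always accept improvements, occasionally accept
    # uphill moves with probability exp(-(prop_loss - cur) / T).
    if prop_loss < cur or rng.random() < np.exp(-(prop_loss - cur) / T):
        theta, cur = proposal, prop_loss
        if cur < best_loss:
            best, best_loss = theta.copy(), cur

print(f"best mean squared error found: {best_loss:.5f}")
```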
A Practical Bayesian Framework for Backpropagation Networks
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penal...
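To illustrate the "objective choice of weight decay" idea mentioned above, the sketch below evaluates the log evidence for a linear-in-the-parameters model, where the Gaussian-approximation calculation is exact; the prior precision alpha plays the role of the weight-decay coefficient. The polynomial basis, data, noise level, and hyperparameter grid are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

N, M = 30, 8
x = np.linspace(-1, 1, N)
t = np.sin(np.pi * x) + rng.normal(scale=0.2, size=N)   # noisy targets
Phi = np.vander(x, M, increasing=True)                  # polynomial basis
beta = 1.0 / 0.2 ** 2                                   # known noise precision

def log_evidence(alpha):
    # Gaussian prior with precision alpha corresponds to a weight-decay term.
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    w_mp = beta * np.linalg.solve(A, Phi.T @ t)
    E = 0.5 * beta * np.sum((t - Phi @ w_mp) ** 2) + 0.5 * alpha * w_mp @ w_mp
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta) - E
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))

# The evidence gives an objective criterion for the amount of weight decay.
for alpha in (1e-3, 1e-1, 1.0, 10.0):
    print(f"alpha = {alpha:6.3f}   log evidence = {log_evidence(alpha):8.2f}")
```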
A General Backpropagation Algorithm for Feedforward Neural Networks Learning
A general backpropagation algorithm is proposed for feedforward neural network learning with time varying inputs. The Lyapunov function approach is used to rigorously analyze the convergence of weights, with the use of the algorithm, toward minima of the error function. Sufficient conditions to guarantee the convergence of weights for time varying inputs are derived. It is shown that most commo...
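As a toy illustration of learning with time-varying inputs, the sketch below runs on-line gradient steps on an input stream whose mean drifts over time and monitors the squared distance to the target weights as a Lyapunov-style quantity. A deliberately linear model, the drift pattern, and the step size are illustrative assumptions, not the algorithm or analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 10
w_true = rng.normal(size=N)       # target weights
w = np.zeros(N)
eta = 0.05

for step in range(5000):
    drift = np.sin(2 * np.pi * step / 1000)          # slowly varying input mean
    x = rng.normal(loc=drift, size=N)
    err = w @ x - w_true @ x
    w -= eta * err * x                               # on-line gradient step
    if step % 1000 == 0:
        # Lyapunov-style check: the distance to the target weights should
        # keep decreasing along the trajectory.
        print(f"step {step:5d}   ||w - w*||^2 = {np.sum((w - w_true) ** 2):.5f}")
```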
Journal
Journal title: IEEE Transactions on Neural Networks
Year: 2008
ISSN: 1045-9227, 1941-0093
DOI: 10.1109/tnn.2008.2000396